Chapter 4

Today is

date()
## [1] "Tue Nov 17 20:14:20 2020"

Analysis exercise

1. Create a new R Markdown file

done!

2. Load the Boston Data

Let us load the data, summarize it, and take a glimpse at its structure.

library(dplyr); library(MASS)
glimpse(Boston) 
## Rows: 506
## Columns: 14
## $ crim    <dbl> 0.00632, 0.02731, 0.02729, 0.03237, 0.06905, 0.02985, 0.08829…
## $ zn      <dbl> 18.0, 0.0, 0.0, 0.0, 0.0, 0.0, 12.5, 12.5, 12.5, 12.5, 12.5, …
## $ indus   <dbl> 2.31, 7.07, 7.07, 2.18, 2.18, 2.18, 7.87, 7.87, 7.87, 7.87, 7…
## $ chas    <int> 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0…
## $ nox     <dbl> 0.538, 0.469, 0.469, 0.458, 0.458, 0.458, 0.524, 0.524, 0.524…
## $ rm      <dbl> 6.575, 6.421, 7.185, 6.998, 7.147, 6.430, 6.012, 6.172, 5.631…
## $ age     <dbl> 65.2, 78.9, 61.1, 45.8, 54.2, 58.7, 66.6, 96.1, 100.0, 85.9, …
## $ dis     <dbl> 4.0900, 4.9671, 4.9671, 6.0622, 6.0622, 6.0622, 5.5605, 5.950…
## $ rad     <int> 1, 2, 2, 3, 3, 3, 5, 5, 5, 5, 5, 5, 5, 4, 4, 4, 4, 4, 4, 4, 4…
## $ tax     <dbl> 296, 242, 242, 222, 222, 222, 311, 311, 311, 311, 311, 311, 3…
## $ ptratio <dbl> 15.3, 17.8, 17.8, 18.7, 18.7, 18.7, 15.2, 15.2, 15.2, 15.2, 1…
## $ black   <dbl> 396.90, 396.90, 392.83, 394.63, 396.90, 394.12, 395.60, 396.9…
## $ lstat   <dbl> 4.98, 9.14, 4.03, 2.94, 5.33, 5.21, 12.43, 19.15, 29.93, 17.1…
## $ medv    <dbl> 24.0, 21.6, 34.7, 33.4, 36.2, 28.7, 22.9, 27.1, 16.5, 18.9, 1…
summary(Boston)
##       crim                zn             indus            chas        
##  Min.   : 0.00632   Min.   :  0.00   Min.   : 0.46   Min.   :0.00000  
##  1st Qu.: 0.08205   1st Qu.:  0.00   1st Qu.: 5.19   1st Qu.:0.00000  
##  Median : 0.25651   Median :  0.00   Median : 9.69   Median :0.00000  
##  Mean   : 3.61352   Mean   : 11.36   Mean   :11.14   Mean   :0.06917  
##  3rd Qu.: 3.67708   3rd Qu.: 12.50   3rd Qu.:18.10   3rd Qu.:0.00000  
##  Max.   :88.97620   Max.   :100.00   Max.   :27.74   Max.   :1.00000  
##       nox               rm             age              dis        
##  Min.   :0.3850   Min.   :3.561   Min.   :  2.90   Min.   : 1.130  
##  1st Qu.:0.4490   1st Qu.:5.886   1st Qu.: 45.02   1st Qu.: 2.100  
##  Median :0.5380   Median :6.208   Median : 77.50   Median : 3.207  
##  Mean   :0.5547   Mean   :6.285   Mean   : 68.57   Mean   : 3.795  
##  3rd Qu.:0.6240   3rd Qu.:6.623   3rd Qu.: 94.08   3rd Qu.: 5.188  
##  Max.   :0.8710   Max.   :8.780   Max.   :100.00   Max.   :12.127  
##       rad              tax           ptratio          black       
##  Min.   : 1.000   Min.   :187.0   Min.   :12.60   Min.   :  0.32  
##  1st Qu.: 4.000   1st Qu.:279.0   1st Qu.:17.40   1st Qu.:375.38  
##  Median : 5.000   Median :330.0   Median :19.05   Median :391.44  
##  Mean   : 9.549   Mean   :408.2   Mean   :18.46   Mean   :356.67  
##  3rd Qu.:24.000   3rd Qu.:666.0   3rd Qu.:20.20   3rd Qu.:396.23  
##  Max.   :24.000   Max.   :711.0   Max.   :22.00   Max.   :396.90  
##      lstat            medv      
##  Min.   : 1.73   Min.   : 5.00  
##  1st Qu.: 6.95   1st Qu.:17.02  
##  Median :11.36   Median :21.20  
##  Mean   :12.65   Mean   :22.53  
##  3rd Qu.:16.95   3rd Qu.:25.00  
##  Max.   :37.97   Max.   :50.00

The data contain 506 observations, each of which apparently represents a different census tract in Boston. There are 14 variables describing various characteristics of these census tracts, such as the crime rate, the tax rate, and the median value of homes in the tract.
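As a quick sanity check (a minimal sketch; the variable descriptions are in the MASS documentation), the dimensions can be confirmed directly:

# confirm the number of census tracts and variables
dim(Boston)   # 506 rows, 14 columns
# the full variable descriptions are documented in the MASS package:
# help("Boston", package = "MASS")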

3. Graphical overview

I start by drawing bar plots of the distribution of each variable.

library(tidyr); library(dplyr); library(ggplot2); library(GGally)
gather(Boston) %>% ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_bar()

3.1 Histograms

Since the data contain so many variables, each individual histogram is relatively small. However, we can at least see that there is quite a bit of variation in the age, nox and medv variables; a closer look at these three is sketched below.
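To take a closer look at those three variables, a minimal sketch (reusing the same tidyr/ggplot2 approach as above) could be:

library(tidyr); library(dplyr); library(ggplot2)
# close-up histograms of age, nox and medv only (sketch)
Boston %>%
  dplyr::select(age, nox, medv) %>%
  gather() %>%
  ggplot(aes(value)) + facet_wrap("key", scales = "free") + geom_histogram(bins = 30)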

library("ggplot2"); library("GGally");library(corrplot)
pairs(Boston)

# correlation matrix rounded to two decimals, visualized with ellipses
cMatrix <- cor(Boston) %>% round(digits = 2)
corrplot(cMatrix, method = "ellipse", type = "upper")

Since the correlation plot with scatter plots is messy, I will concentrate on the latter figure, which shows only the correlations between the variables. Based on the figure, some variable pairs, such as the lower status of the population (lstat) and the median value of homes (medv), are strongly negatively correlated; others, such as the proportion of non-retail business acres (indus) and the nitrogen oxides concentration (nox), are strongly positively correlated; and for some pairs there is hardly any relationship at all.
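The exact correlations behind these statements can be read directly from the correlation matrix created above, for example (a minimal sketch):

# correlations mentioned above
cMatrix["lstat", "medv"]   # strong negative correlation
cMatrix["indus", "nox"]    # strong positive correlation
cMatrix["chas", "rm"]      # close to zero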

4. Standardize the data

4.1 Standardization

library("ggplot2"); library("GGally");library(corrplot)
bostonStand <-scale(Boston)
bostonStand <- as.data.frame(bostonStand)
summary(bostonStand)
##       crim                 zn               indus              chas        
##  Min.   :-0.419367   Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723  
##  1st Qu.:-0.410563   1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723  
##  Median :-0.390280   Median :-0.48724   Median :-0.2109   Median :-0.2723  
##  Mean   : 0.000000   Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.007389   3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723  
##  Max.   : 9.924110   Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648  
##       nox                rm               age               dis         
##  Min.   :-1.4644   Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658  
##  1st Qu.:-0.9121   1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049  
##  Median :-0.1441   Median :-0.1084   Median : 0.3171   Median :-0.2790  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.5981   3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617  
##  Max.   : 2.7296   Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566  
##       rad               tax             ptratio            black        
##  Min.   :-0.9819   Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033  
##  1st Qu.:-0.6373   1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049  
##  Median :-0.5225   Median :-0.4642   Median : 0.2746   Median : 0.3808  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.6596   3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332  
##  Max.   : 1.6596   Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406  
##      lstat              medv        
##  Min.   :-1.5296   Min.   :-1.9063  
##  1st Qu.:-0.7986   1st Qu.:-0.5989  
##  Median :-0.1811   Median :-0.1449  
##  Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.6024   3rd Qu.: 0.2683  
##  Max.   : 3.5453   Max.   : 2.9865

We can see that after the standardization procedure each variable is centered at zero (the mean of every variable is exactly zero) and measured in standard deviation units.
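As a quick check (a minimal sketch; crimManual is just an illustrative name), standardizing a single variable by hand gives the same result as scale():

# manual standardization of one variable: (x - mean(x)) / sd(x)
crimManual <- (Boston$crim - mean(Boston$crim)) / sd(Boston$crim)
# this matches the scaled column (up to floating point precision)
all.equal(as.numeric(crimManual), bostonStand$crim)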

library("ggplot2"); library("GGally");library(corrplot); library(dplyr)
bins <- quantile(bostonStand$crim)
crime <-cut(bostonStand$crim, breaks = bins, include.lowest = TRUE, labels = c("low", "med_low", "med_high", "high") )
bostonStand <- dplyr::select(bostonStand , -crim)
summary(bostonStand)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv        
##  Min.   :-1.9063  
##  1st Qu.:-0.5989  
##  Median :-0.1449  
##  Mean   : 0.0000  
##  3rd Qu.: 0.2683  
##  Max.   : 2.9865
bostonStand$crime <- crime
summary(bostonStand)
##        zn               indus              chas              nox         
##  Min.   :-0.48724   Min.   :-1.5563   Min.   :-0.2723   Min.   :-1.4644  
##  1st Qu.:-0.48724   1st Qu.:-0.8668   1st Qu.:-0.2723   1st Qu.:-0.9121  
##  Median :-0.48724   Median :-0.2109   Median :-0.2723   Median :-0.1441  
##  Mean   : 0.00000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.04872   3rd Qu.: 1.0150   3rd Qu.:-0.2723   3rd Qu.: 0.5981  
##  Max.   : 3.80047   Max.   : 2.4202   Max.   : 3.6648   Max.   : 2.7296  
##        rm               age               dis               rad         
##  Min.   :-3.8764   Min.   :-2.3331   Min.   :-1.2658   Min.   :-0.9819  
##  1st Qu.:-0.5681   1st Qu.:-0.8366   1st Qu.:-0.8049   1st Qu.:-0.6373  
##  Median :-0.1084   Median : 0.3171   Median :-0.2790   Median :-0.5225  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 0.4823   3rd Qu.: 0.9059   3rd Qu.: 0.6617   3rd Qu.: 1.6596  
##  Max.   : 3.5515   Max.   : 1.1164   Max.   : 3.9566   Max.   : 1.6596  
##       tax             ptratio            black             lstat        
##  Min.   :-1.3127   Min.   :-2.7047   Min.   :-3.9033   Min.   :-1.5296  
##  1st Qu.:-0.7668   1st Qu.:-0.4876   1st Qu.: 0.2049   1st Qu.:-0.7986  
##  Median :-0.4642   Median : 0.2746   Median : 0.3808   Median :-0.1811  
##  Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000   Mean   : 0.0000  
##  3rd Qu.: 1.5294   3rd Qu.: 0.8058   3rd Qu.: 0.4332   3rd Qu.: 0.6024  
##  Max.   : 1.7964   Max.   : 1.6372   Max.   : 0.4406   Max.   : 3.5453  
##       medv              crime    
##  Min.   :-1.9063   low     :127  
##  1st Qu.:-0.5989   med_low :126  
##  Median :-0.1449   med_high:126  
##  Mean   : 0.0000   high    :127  
##  3rd Qu.: 0.2683                 
##  Max.   : 2.9865

4.2 Train and test sets

Let us create the training set (a random 80% of the observations) and the test set (the remaining 20%), and then take a glimpse at both of them.

library("ggplot2");library(corrplot); library(dplyr)
nObs <- nrow(bostonStand)
# randomly pick 80% of the observations for the training set
indTrain <- sample(nObs, size = floor(nObs * 0.8))
# Training set 
trainBoston <- bostonStand[indTrain,]
# Test set 
testBoston <- bostonStand[-indTrain,]

glimpse(trainBoston) 
## Rows: 404
## Columns: 14
## $ zn      <dbl> 0.4560568, 3.3717021, 1.4422310, 2.7285450, -0.4872402, -0.48…
## $ indus   <dbl> -0.76917014, -1.19043127, -1.12192167, -1.19334657, -0.079701…
## $ chas    <dbl> -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0.2723291, -…
## $ nox     <dbl> -1.06746238, -1.33498587, -1.01568364, -1.09335175, -0.566934…
## $ rm      <dbl> 0.21116139, 1.14339028, 0.64667596, 0.44172792, 0.12861288, -…
## $ age     <dbl> -0.6918540, -1.6972232, -1.3419691, -1.6616978, -1.2886809, 0…
## $ dis     <dbl> 1.914535748, 1.667968097, 1.274989030, 0.762715291, 0.0714045…
## $ rad     <dbl> -0.2927910, -0.9818712, -0.5224844, -0.7521778, -0.6373311, -…
## $ tax     <dbl> -0.46421320, -0.73121670, -0.06074124, -0.92701927, -0.778683…
## $ ptratio <dbl> 0.29768250, -1.45755797, -1.50374851, -0.07184181, 0.06672981…
## $ black   <dbl> 0.19755732, 0.41673722, 0.36186010, 0.42670493, 0.31914137, 0…
## $ lstat   <dbl> -0.438739148, -0.672597937, -1.115109179, -1.166922204, -0.45…
## $ medv    <dbl> 0.21389273, 1.05111278, 0.86627199, 0.89889095, 0.60532029, -…
## $ crime   <fct> med_low, low, low, low, med_low, low, med_high, med_high, med…
glimpse(testBoston) 
## Rows: 102
## Columns: 14
## $ zn      <dbl> -0.48724019, 0.04872402, -0.48724019, -0.48724019, -0.4872401…
## $ indus   <dbl> -1.3055857, -0.4761823, -0.4368257, -0.4368257, -0.4368257, -…
## $ chas    <dbl> -0.2723291, -0.2723291, -0.2723291, -0.2723291, -0.2723291, -…
## $ nox     <dbl> -0.83445805, -0.26489191, -0.14407485, -0.14407485, -0.144074…
## $ rm      <dbl> 1.227362043, -0.930285282, -0.641365488, -1.017103545, -0.454…
## $ age     <dbl> -0.51067434, 1.11638970, -0.42896588, 1.04889141, 0.73271521,…
## $ dis     <dbl> 1.076671135, 1.086121629, 0.334118787, 0.001356935, 0.1031753…
## $ rad     <dbl> -0.7521778, -0.5224844, -0.6373311, -0.6373311, -0.6373311, -…
## $ tax     <dbl> -1.10502160, -0.57694801, -0.60068166, -0.60068166, -0.600681…
## $ ptratio <dbl> 0.11292035, -1.50374851, 1.17530274, 1.17530274, 1.17530274, …
## $ black   <dbl> 0.4406159, 0.3281233, 0.4265954, 0.2179309, 0.3927490, 0.4147…
## $ lstat   <dbl> -1.02548665, 2.41937935, -0.58577611, 1.17166569, 0.16481258,…
## $ medv    <dbl> 1.48603229, -0.65594629, -0.28626471, -0.97126294, -0.3188836…
## $ crime   <fct> low, med_low, med_high, med_high, med_high, med_high, med_hig…
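As a small sanity check (a sketch, not part of the exercise steps themselves), we can verify the split sizes and that all four crime categories appear in the training set:

# sizes of the training and test sets
nrow(trainBoston); nrow(testBoston)
# distribution of the crime categories in the training set
table(trainBoston$crime)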

5. Discriminant analysis

Let us fit a linear discriminant analysis (LDA) model on the training data, with the categorical crime rate as the target and all the other variables as predictors, and then draw the LDA (bi)plot.

library("ggplot2");library(corrplot); library(dplyr)

lda.fit <- lda( crime ~ . , data = trainBoston )

# helper function that adds arrows for the variable coefficients to the LDA plot
lda.arrows <- function(x, myscale = 1, arrow_heads = 0.1, color = "red", tex = 0.75, choices = c(1,2)){
  heads <- coef(x)
  arrows(x0 = 0, y0 = 0, 
         x1 = myscale * heads[,choices[1]], 
         y1 = myscale * heads[,choices[2]], col=color, length = arrow_heads)
  text(myscale * heads[,choices], labels = row.names(heads), 
       cex = tex, col=color, pos=3)
}

# target classes as numeric
classes <- as.numeric(trainBoston$crime)

plot(lda.fit , dimen = 2,  col = classes, pch = classes)
lda.arrows(lda.fit, myscale = 1)
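To see how much of the between-class separation each linear discriminant captures, the proportion of trace can be computed from the fitted object (a minimal sketch; the same numbers are reported by printing lda.fit):

# proportion of between-class variance explained by each discriminant
round(lda.fit$svd^2 / sum(lda.fit$svd^2), 3)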

6. Predict and cross tabulate

Let us first save the correct values of the crime variable in the test data into a separate vector, and then remove the crime variable from the test set.

library("ggplot2");library(corrplot); library(dplyr)

# save the correct classes of the test data
originalCrime <- testBoston$crime

# remove the crime variable from the test data
testBoston <- dplyr::select(testBoston, -crime)

Then let us predict the values of the crime variable in the test data using the model fitted on the training data. Finally, we cross tabulate the predicted and actual classes.

library("ggplot2");library(corrplot); library(dplyr)

lda.pred <- predict(lda.fit, newdata =  testBoston)

table(correct = originalCrime, lda.pred$class)
##           
## correct    low med_low med_high high
##   low       11       8        3    0
##   med_low    5      16        8    0
##   med_high   1       6       15    0
##   high       0       0        0   29

The diagonal from the top-left corner to the bottom-right corner shows the number of correct predictions. Based on the table, our model performs reasonably well. It predicts the high category perfectly (29 out of 29), whereas the low, med_low and med_high categories are more often confused with their neighbouring categories.
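One simple summary of the table above is the overall prediction accuracy, i.e. the share of test observations on the diagonal (a minimal sketch using the originalCrime vector saved above):

# share of correctly classified test observations
mean(originalCrime == lda.pred$class)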

7. Distances and K-means algorithm

7.1 Distances

First, we reload the Boston data and standardize the variables again. Then I calculate the Euclidean distances between the observations.

library(dplyr); library(MASS)
# standardize the variables again
boston <- scale(Boston)
boston <- as.data.frame(boston)

# Euclidean distance matrix between the observations
distE <- dist(boston)
summary(distE)
##    Min. 1st Qu.  Median    Mean 3rd Qu.    Max. 
##  0.1343  3.4625  4.8241  4.9111  6.1863 14.3970
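For comparison (a sketch; the step above only computes the Euclidean distances), the Manhattan distance matrix can be obtained in the same way by changing the method argument:

# Manhattan (city block) distances between the observations
distM <- dist(boston, method = "manhattan")
summary(distM)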

7.2 K-means algorithm

I first perform k-means clustering with 7 clusters. Note that this number of clusters is picked arbitrarily at this point.

library(dplyr); library(MASS)
# k-means clustering on the standardized data with 7 clusters
kM <- kmeans(boston, centers = 7)
pairs(boston[1:5], col = kM$cluster)

pairs(boston[6:10], col = kM$cluster )

pairs(boston[11:14], col = kM$cluster )

Let us then be a bit more formal and look for a suitable number of clusters by calculating the total within-cluster sum of squares (TWCSS) for different numbers of clusters.

library(dplyr); library (MASS); library("ggplot2");
set.seed(123)

k_max <- 10

# total within-cluster sum of squares for 1 to k_max clusters
twcss <- sapply(1:k_max, function(k){kmeans(boston, k)$tot.withinss})

qplot(x = 1:k_max, y = twcss, geom = 'line')

Based on the figure, the TWCSS drops sharply when the number of clusters increases from one to two, so two clusters seems like a reasonable choice. Let us perform the k-means clustering with two clusters and plot the results.

library(dplyr); library(MASS)
# k-means clustering on the standardized data with 2 clusters
kM <- kmeans(boston, centers = 2)
pairs(boston[1:5], col = kM$cluster)

pairs(boston[6:10], col = kM$cluster )

pairs(boston[11:14], col = kM$cluster )
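To interpret the two clusters, we can look at the cluster centers, which give the average standardized value of each variable within a cluster, and the cluster sizes (a minimal sketch):

# average standardized value of each variable within each cluster
round(kM$centers, 2)
# number of observations in each cluster
kM$size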